
    Improved navigation by combining VOR/DME information with air or inertial data

    The improvement in navigational accuracy obtainable by combining VOR/DME information (from one or two stations) with air data (airspeed and heading) or with data from an inertial navigation system (INS) by means of a maximum-likelihood filter was determined. It was found that adding air data to the information from one VOR/DME station reduces the RMS position error by a factor of about 2, whereas adding inertial data from a low-quality INS reduces it by a factor of about 3. Using information from two VOR/DME stations together with air or inertial data yields large improvements in RMS position accuracy over the use of a single VOR/DME station: roughly a factor of 15 to 20 for the air-data case and 25 to 35 for the inertial-data case. As far as position accuracy is concerned, at most one VOR station need be used. When continuously updating an INS with VOR/DME information, the use of a high-quality INS (0.01 deg/hr gyro drift) instead of a low-quality INS (1.0 deg/hr gyro drift) does not substantially improve position accuracy.
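    The core idea, fusing two independent position estimates by maximum likelihood, can be sketched with a simple inverse-variance weighted average; for Gaussian errors this is the ML estimate, and the fused error is smaller than either input's. The noise levels below are illustrative assumptions, not the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([100.0, 50.0])  # true aircraft position (arbitrary units)

# Two independent noisy position estimates: one from VOR/DME, one
# dead-reckoned from air data (standard deviations are assumptions).
sigma_vor, sigma_air = 2.0, 1.5
n = 10_000
vor = true_pos + rng.normal(0, sigma_vor, (n, 2))
air = true_pos + rng.normal(0, sigma_air, (n, 2))

# Maximum-likelihood fusion of two Gaussian estimates is the
# inverse-variance weighted average; the fused variance is
# 1 / (1/sigma_vor^2 + 1/sigma_air^2), smaller than either alone.
w_vor = 1.0 / sigma_vor**2
w_air = 1.0 / sigma_air**2
fused = (w_vor * vor + w_air * air) / (w_vor + w_air)

def rms(est):
    """RMS radial position error over all trials."""
    return np.sqrt(np.mean(np.sum((est - true_pos) ** 2, axis=1)))

print(rms(vor), rms(air), rms(fused))  # fused RMS is the smallest
```

    The actual study filters sequences of measurements over time rather than fusing single fixes, but the same weighting principle explains why adding air or inertial data to VOR/DME always reduces the RMS error.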

    Real-time human action recognition on an embedded, reconfigurable video processing architecture

    Copyright @ 2008 Springer-Verlag. In recent years, automatic human motion recognition has been widely researched within the computer vision and image processing communities. Here we propose a real-time embedded vision solution for human motion recognition implemented on a ubiquitous device. There are three main contributions in this paper. Firstly, we have developed a fast human motion recognition system with simple motion features and a linear Support Vector Machine (SVM) classifier. The method has been tested on a large, public human action dataset and achieved competitive performance for the temporal-template (e.g. "motion history image") class of approaches. Secondly, we have developed a reconfigurable, FPGA-based video processing architecture. One advantage of this architecture is that the system's processing performance can be reconfigured for a particular application by adding new or replicated processing cores. Finally, we have successfully implemented a human motion recognition system on this reconfigurable architecture. With a small number of human actions (hand gestures), this stand-alone system performs reliably, with an 80% average recognition rate using limited training data. This type of system has applications in security systems, man-machine communication and intelligent environments. Funded by DTI and Broadcom Ltd.
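    The temporal-template idea the abstract mentions can be sketched in a few lines: a motion history image (MHI) marks recently moving pixels brightly and fades them over time, yielding a single image that summarises a motion sequence and can be flattened into an SVM feature vector. The decay length and differencing threshold below are illustrative assumptions.

```python
import numpy as np

def motion_history_image(frames, tau=20, thresh=10):
    """Build a motion history image from grayscale frames.

    Pixels that moved in the latest frame are set to `tau`; all others
    decay by 1 per frame, so brightness encodes recency of motion.
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    prev = frames[0].astype(np.int16)
    for f in frames[1:]:
        cur = f.astype(np.int16)
        moving = np.abs(cur - prev) > thresh   # simple frame differencing
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
        prev = cur
    return mhi / tau  # normalise to [0, 1] for use as a feature map
```

    In a pipeline like the paper's, `motion_history_image(...)` (or moments computed from it) would be flattened and fed to a linear SVM trained on labelled action clips.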

    Human Intent Prediction Using Markov Decision Processes

    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/97080/1/AIAA2012-2445.pd

    Fusion of Single View Soft k-NN Classifiers for Multicamera Human Action Recognition

    Proceedings of: 5th International Conference on Hybrid Artificial Intelligence Systems (HAIS 2010), San Sebastián, Spain, June 23-25, 2010. This paper presents two different classifier fusion algorithms applied in the domain of human action recognition from video. A set of cameras observes a person performing an action from a predefined set. For each camera view a 2D descriptor is computed, and a posterior on the performed activity is obtained using a soft classifier. These posteriors are combined using voting and a Bayesian network to obtain a single belief measure for the final decision on the performed action. Experiments are conducted with different low-level frame descriptors on the IXMAS dataset, achieving results comparable to state-of-the-art 3D proposals while performing only 2D processing. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
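    The two fusion schemes described, voting over per-view decisions and a probabilistic combination of per-view posteriors, can be sketched as follows. A naive-Bayes product rule stands in here for the paper's Bayesian network (an assumption for illustration); both operate on a views-by-classes matrix of soft classifier outputs.

```python
import numpy as np

def fuse_views(posteriors):
    """Fuse per-camera class posteriors (shape: views x classes).

    Returns (vote_winner, bayes_winner): the class chosen by majority
    voting over each view's argmax, and the class maximising the
    product of posteriors across views (view-independence assumed).
    """
    posteriors = np.asarray(posteriors, dtype=float)
    # Scheme 1: each view casts one vote for its most likely class.
    votes = np.argmax(posteriors, axis=1)
    vote_winner = int(np.bincount(votes, minlength=posteriors.shape[1]).argmax())
    # Scheme 2: product rule in log space for numerical stability.
    log_belief = np.sum(np.log(posteriors + 1e-12), axis=0)
    bayes_winner = int(np.argmax(log_belief))
    return vote_winner, bayes_winner
```

    The product rule keeps the confidence information that hard voting discards, which is why soft combination often wins when individual views are uncertain.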

    GRIDDS - A Gait Recognition Image and Depth Dataset

    Several approaches based on human gait have been proposed in the literature, whether for medical research, smart surveillance, human-machine interaction, or other purposes; their validation depends heavily on access to common input data through available datasets, enabling a coherent performance comparison. The advent of depth sensors leveraged the emergence of novel approaches and, consequently, the usage of new datasets. In this work we present GRIDDS, a Gait Recognition Image and Depth Dataset: a new and publicly available depth-based gait dataset that can be used mostly for person and gender recognition purposes. (c) Springer Nature Switzerland AG 2019

    A chemotactic-based model for spatial activity recognition

    Spatial activity recognition in everyday environments is particularly challenging due to noise incorporated during video tracking. We address the noise issue of spatial recognition with a biologically inspired chemotactic model that is capable of handling noisy data. The model is based on bacterial chemotaxis, a process that allows bacteria to survive by changing motile behaviour in relation to environmental dynamics. Using chemotactic principles, we propose the chemotactic model and evaluate its classification performance in a smart house environment. The model exhibits high classification accuracy (99%) on a diverse 10-class activity dataset and outperforms the discrete hidden Markov model (HMM). High accuracy (>89%) is also maintained across small training sets and with varying degrees of artificial noise incorporated into the testing sequences. Importantly, unlike other bottom-up spatial activity recognition models, we show that the chemotactic model is capable of recognizing simple interwoven activities.
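    The biological principle the model borrows, bacterial run-and-tumble chemotaxis, can be sketched as an agent that keeps its heading while the sensed signal improves ("run") and picks a random new heading when it worsens ("tumble"); this simple local rule is what makes the behaviour robust to noise. The step size, step count, and concentration function below are illustrative assumptions, not the paper's recognition model.

```python
import numpy as np

def run_and_tumble(concentration, start, steps=400, seed=0):
    """Simulate a run-and-tumble agent climbing a concentration field.

    `concentration(pos)` returns the sensed signal at a 2D position;
    the agent runs while the signal increases and tumbles (random new
    heading) when it decreases.
    """
    rng = np.random.default_rng(seed)
    pos = np.array(start, dtype=float)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    last = concentration(pos)
    for _ in range(steps):
        pos += 0.1 * np.array([np.cos(heading), np.sin(heading)])
        now = concentration(pos)
        if now < last:                       # signal worsened: tumble
            heading = rng.uniform(0.0, 2.0 * np.pi)
        last = now
    return pos
```

    Despite never computing a gradient explicitly, the biased walk drifts toward the concentration peak, which illustrates why a chemotaxis-inspired recogniser can tolerate noisy positional input.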

    Multicamera Action Recognition with Canonical Correlation Analysis and Discriminative Sequence Classification

    Proceedings of: 4th International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC 2011), La Palma, Canary Islands, Spain, May 30 - June 3, 2011. This paper presents a feature fusion approach to the recognition of human actions from multiple cameras that avoids the computation of the 3D visual hull. Action descriptors are extracted for each of the available camera views and projected into a common subspace that maximizes the correlation between the components of the projections. That common subspace is learned using Probabilistic Canonical Correlation Analysis. The action classification is made in that subspace using a discriminative classifier. Results of the proposed method are shown for the classification of the IXMAS dataset.
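    The subspace-learning step can be sketched with classical linear CCA (the paper uses the probabilistic variant): find projections of the two views' descriptors that are maximally correlated, via an SVD of the whitened cross-covariance. The regularisation constant and dimensionality below are assumptions for illustration.

```python
import numpy as np

def cca_projections(X, Y, k=2, reg=1e-6):
    """Minimal linear CCA: project two views (n x dx, n x dy) into a
    common k-dimensional subspace maximising correlation.

    Returns (projected X, projected Y, canonical correlations).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # Singular values of the whitened cross-covariance are the
    # canonical correlations; singular vectors give the projections.
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A, B = Wx @ U[:, :k], Wy @ Vt[:k].T
    return X @ A, Y @ B, s[:k]
```

    Once both views live in the shared subspace, any discriminative classifier (e.g. an SVM) can be trained there, which is the role the common subspace plays in the paper's pipeline.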

    Inductive learning spatial attention

    This paper investigates the automatic induction of spatial attention from the visual observation of objects manipulated on a table top. In this work, space is represented in terms of a novel observer-object relative reference system, named the Local Cardinal System, defined upon the local neighbourhood of objects on the table. We present results of applying the proposed methodology on five distinct scenarios involving the construction of spatial patterns of coloured blocks.